    Resource Efficient Hardware Architecture for Fast Computation of Running Max/Min Filters

    Running max/min filters on rectangular kernels are widely used in many digital signal and image processing applications. Filtering with a k×k kernel requires k²−1 comparisons per sample in a direct implementation, so the cost grows quickly with the kernel size k. Faster computation can be achieved by decomposing the kernel and using constant-time one-dimensional algorithms on custom hardware. This paper presents a hardware architecture for real-time computation of running max/min filters based on the van Herk/Gil-Werman (HGW) algorithm. The proposed architecture uses fewer computation and memory resources than previously reported architectures when targeted to Field Programmable Gate Array (FPGA) devices. Implementation results show that the architecture computes max/min filters on 1024×1024 images with kernels of up to 255×255 in around 8.4 milliseconds (about 120 frames per second) at a clock frequency of 250 MHz. The implementation scales well with the kernel size and offers a good performance/area trade-off suitable for embedded applications. The applicability of the architecture is demonstrated on local adaptive image thresholding.
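
    To make the kernel-decomposition idea concrete, here is a minimal software sketch of the 1D van Herk/Gil-Werman running max (not the paper's hardware design; the function name, the NumPy padding with -inf, and the valid-window output convention are choices of this sketch). A k×k max filter follows by applying the 1D filter along rows and then along columns; the min filter is the same with min and +inf.

        import numpy as np

        def running_max_1d(x, k):
            # van Herk/Gil-Werman: max over every window x[i : i+k] using roughly
            # 3 comparisons per sample, independent of the window size k.
            x = np.asarray(x, dtype=float)
            n = len(x)
            pad = (-n) % k                       # pad so the length is a multiple of k
            xp = np.concatenate([x, np.full(pad, -np.inf)])
            m = len(xp)
            r = np.empty(m)                      # prefix max inside each block of length k
            s = np.empty(m)                      # suffix max inside each block of length k
            for b in range(0, m, k):
                r[b] = xp[b]
                for i in range(b + 1, b + k):
                    r[i] = max(r[i - 1], xp[i])
                s[b + k - 1] = xp[b + k - 1]
                for i in range(b + k - 2, b - 1, -1):
                    s[i] = max(s[i + 1], xp[i])
            # A window [i, i+k-1] spans at most two blocks: a suffix of one block
            # and a prefix of the next, so its max is max(s[i], r[i+k-1]).
            return np.array([max(s[i], r[i + k - 1]) for i in range(n - k + 1)])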

    Block-synchronous Harmonic Control for Scalable Trajectory Planning

    ISBN: 978-953-7619-20-6
    Trajectory planning consists of finding a way to get from a starting position to a goal position while avoiding obstacles within a given environment or navigation space. Harmonic functions may be used as potential fields for trajectory planning. Such functions have no local extrema, so when obstacles correspond to maxima of the potential and goals correspond to minima, control reduces to locally descending the potential field until a minimum is reached. This chapter presents a parallel hardware implementation of this navigation method on reconfigurable digital circuits. Trajectories are estimated after the iterated computation of the harmonic function, given the goal and obstacle positions of the navigation problem. The proposed massively distributed implementation locally computes, at any point of the environment, the direction to take to reach the goal. Changes in the environment can be taken into account immediately, for example when obstacles are discovered during on-line exploration. To fit real-world applications, the implementation is designed to handle very large navigation environments while optimizing computation time.
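
    As a rough illustration of the method (not the chapter's block-synchronous hardware), the sketch below relaxes a grid potential toward a harmonic function with the goal clamped to 0 and obstacles clamped to 1, then follows the steepest local descent. The function names, the Jacobi update, and the assumption that the border cells of `occupancy` are marked as obstacles are all choices of this sketch.

        import numpy as np

        def harmonic_potential(occupancy, goal, iters=2000):
            # occupancy: boolean grid, True = obstacle; assumes the border cells are
            # obstacles so the wrap-around of np.roll never reaches a free cell.
            u = np.ones(occupancy.shape, dtype=float)
            free = ~occupancy
            for _ in range(iters):
                u[goal] = 0.0                                  # goal = global minimum
                avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                              np.roll(u, 1, 1) + np.roll(u, -1, 1))
                u[free] = avg[free]                            # Jacobi relaxation of free cells
                u[occupancy] = 1.0                             # obstacles = maxima
            u[goal] = 0.0
            return u

        def descend(u, start):
            # Follow the locally steepest descent of the potential down to the goal.
            path, (y, x) = [start], start
            while True:
                nbrs = [(y + dy, x + dx) for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))]
                ny, nx = min(nbrs, key=lambda p: u[p])
                if u[ny, nx] >= u[y, x]:
                    return path                                # reached the minimum
                path.append((ny, nx))
                y, x = ny, nx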

    Fault Tolerance of Self Organizing Maps

    As the quest for performance confronts resource constraints, major breakthroughs in computing efficiency are expected to come from unconventional approaches and new models of computation such as brain-inspired computing. Beyond energy, the growing number of defects in physical substrates is becoming another major constraint on the design of computing devices and systems. Neural computing principles remain elusive, yet they are considered a promising paradigm for fault-tolerant computation. Since fault tolerance translates into scalable and reliable computing systems, hardware design itself and the potential use of faulty circuits have further motivated the investigation of neural networks, which can naturally absorb some degree of vulnerability. In this paper, the fault tolerance properties of Self-Organizing Maps (SOMs) are investigated. To assess their intrinsic fault tolerance, considering a generic fully parallel digital implementation of the SOM, we use the bit-flip fault model to inject faults into the registers holding the SOM weights. The distortion measure is used to evaluate performance on synthetic datasets under different fault ratios. Additionally, we evaluate three passive techniques intended to enhance the fault tolerance of SOMs during training/learning under different scenarios.
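
    A minimal software sketch of this kind of fault-injection setup follows (the paper's exact register layout and fixed-point format are not given here, so the 16-bit Q1.15 encoding, the function names, and the parameters are assumptions): weights are viewed as fixed-point registers, random bits are flipped according to a fault ratio, and the distortion is the mean squared distance from each sample to its best-matching unit.

        import numpy as np

        def inject_bitflips(weights, fault_ratio, n_bits=16, rng=None):
            # Flip random bits in a fixed-point (Q1.15, weights assumed in [-1, 1))
            # view of the registers holding the SOM weights.
            rng = np.random.default_rng() if rng is None else rng
            scale = 2 ** (n_bits - 1)
            q = np.clip(np.round(weights * scale), -scale, scale - 1).astype(np.int64)
            flat = q.ravel().copy()
            n_faults = int(fault_ratio * flat.size * n_bits)
            regs = rng.integers(0, flat.size, n_faults)        # which register
            bits = rng.integers(0, n_bits, n_faults)           # which bit inside it
            for r, b in zip(regs, bits):
                flat[r] ^= (1 << int(b))
            flat &= (1 << n_bits) - 1                          # reinterpret as n-bit two's complement
            flat = np.where(flat >= scale, flat - (1 << n_bits), flat)
            return flat.reshape(weights.shape) / scale

        def distortion(data, weights):
            # Mean squared distance from each sample to its best-matching unit
            # (weights flattened to shape [n_units, dim]).
            d = ((data[:, None, :] - weights[None, :, :]) ** 2).sum(-1)
            return d.min(axis=1).mean()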

    Low-cost hardware implementations for discrete-time spiking neural networks

    In this paper, both GPU (Graphics Processing Unit) based and FPGA (Field Programmable Gate Array) based hardware implementations of a discrete-time spiking neuron model are presented. This generalized model is well suited to large-scale neural network implementations, since its dynamics are entirely represented by a spike train (binary code): at the microscopic scale, the membrane potentials have a one-to-one correspondence with the spike train in the asymptotic dynamics. The model also allows us to reproduce complex spiking dynamics such as those obtained with general Integrate-and-Fire (gIF) models. The FPGA design has been coded in Handel-C and VHDL and is based on a fixed-point reconfigurable architecture, while the GPU spiking neuron kernel has been coded in C++ and CUDA. Numerical verifications are provided.
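
    A minimal sketch of a discrete-time spiking update of this kind is shown below (the notation and parameters are mine and need not match the paper's exact model): firing resets the potential through the (1 − Z) factor, so the trajectory is fully encoded in the binary spike train Z.

        import numpy as np

        def simulate(W, I, gamma=0.95, theta=1.0, steps=200, rng=None):
            # W: synaptic weight matrix (n x n), I: constant external input (n,),
            # gamma: leak factor, theta: firing threshold.
            rng = np.random.default_rng() if rng is None else rng
            n = W.shape[0]
            V = rng.uniform(0.0, theta, n)              # membrane potentials
            Z = (V >= theta).astype(float)              # current spikes (binary code)
            raster = []
            for _ in range(steps):
                V = gamma * V * (1.0 - Z) + W @ Z + I   # leak, reset on spike, synaptic input
                Z = (V >= theta).astype(float)          # threshold gives the next spikes
                raster.append(Z.copy())
            return np.array(raster)                     # steps x n binary spike train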

    Stochastic and Asynchronous Spiking Dynamic Neural Fields

    Bio-inspired neural computation attracts a lot of attention as a possible answer to future challenges in designing computational resources. Dynamic neural fields (DNFs) provide cortically inspired models of neural populations that can be applied to a wide variety of tasks, such as perception and sensorimotor control. DNFs are often derived from continuous neural field theory (CNFT). In spite of the parallel structure and regularity of CNFT models, few hardware implementations targeting embedded real-time processing have been studied. In this article, a hardware-friendly model adapted from the CNFT is introduced: the RSDNF model (randomly spiking dynamic neural fields). Thanks to their simplified 2D structure, RSDNFs achieve scalable parallel implementations on digital hardware while maintaining the behavioral properties of CNFT models. Spike-based computation within the neurons of the field reduces inter-neuron connection bandwidth, and local stochastic spike propagation ensures that inhibition and excitation are broadcast without a fully connected network. The behavioral soundness and robustness of the model in the presence of noise and distracters are fully validated in software and hardware. A field-programmable gate array (FPGA) implementation shows how the RSDNF model reaches a level of density and scalability out of reach for previous hardware implementations of dynamic neural field models.
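
    For reference, the non-spiking CNFT baseline that RSDNF approximates can be sketched as a discretized field update (this is the continuous, rate-based model, not the spiking RSDNF itself; the kernel shape, the sigmoid, and the parameters below are illustrative assumptions).

        import numpy as np
        from scipy.signal import fftconvolve

        def dog_kernel(size=21, s_exc=2.0, s_inh=6.0, a_exc=1.0, a_inh=0.7):
            # Difference-of-Gaussians lateral kernel: on-centre excitation,
            # off-surround inhibition.
            ax = np.arange(size) - size // 2
            yy, xx = np.meshgrid(ax, ax, indexing="ij")
            g = lambda s: np.exp(-(xx ** 2 + yy ** 2) / (2.0 * s ** 2))
            return a_exc * g(s_exc) - a_inh * g(s_inh)

        def dnf_step(u, inp, w, dt=0.1, tau=1.0, h=-0.2):
            # One Euler step of tau * du/dt = -u + w * f(u) + input + h
            # on a 2D field, with a sigmoid firing-rate function f.
            f = 1.0 / (1.0 + np.exp(-u))
            lateral = fftconvolve(f, w, mode="same")   # lateral excitation/inhibition
            return u + (dt / tau) * (-u + lateral + inp + h)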

    Embedded harmonic control for trajectory planning in large environments

    This paper presents an embedded FPGA-based architecture to compute navigation trajectories along a harmonic potential. Goals and obstacles may be changed during the computation. Large environments are split into blocks and processed with progressively increasing precision; together, these choices enable an optimization of the overall computation time that is studied both theoretically and experimentally. Implementation results confirm substantial speedup factors.
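
    The increasing-precision idea can be sketched in software as follows (the schedule, the precisions, and the function names are assumptions of this sketch, not the paper's architecture): most relaxation sweeps run in cheap low precision, and each result seeds the next, more precise pass.

        import numpy as np

        def relax(u, obstacles, goal, iters):
            # Jacobi sweeps of the harmonic potential at the precision of u.dtype;
            # assumes the border cells of `obstacles` are True so np.roll wrap-around
            # never reaches a free cell.
            for _ in range(iters):
                u[goal] = 0.0
                u = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                            np.roll(u, 1, 1) + np.roll(u, -1, 1))
                u[obstacles] = 1.0
            u[goal] = 0.0
            return u

        def plan_with_increasing_precision(obstacles, goal, sweeps=(400, 400, 400)):
            # Run most sweeps in float16, then refine the same field in float32
            # and float64, reusing the previous result as the starting point.
            u = np.ones(obstacles.shape, dtype=np.float16)
            for dtype, n in zip((np.float16, np.float32, np.float64), sweeps):
                u = relax(u.astype(dtype), obstacles, goal, n)
            return u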

    Massively distributed implementation of a spiking neural network for image segmentation on FPGA

    Numerous neural network hardware implementations now use digital reconfigurable devices such as Field Programmable Gate Arrays (FPGAs), which offer an interesting compromise between the hardware efficiency of Application-Specific Integrated Circuits (ASICs) and the flexibility of a software-like design flow. Another current trend of neural research focuses on elementary neural mechanisms such as spiking neurons. Their rather simple and asynchronous behavior has motivated several implementations on analog devices, whereas digital implementations have appeared unable to handle large spiking neural networks for lack of density. In this paper, we develop an optimized FPGA implementation of a standard spiking model (LEGION) of integrate-and-fire neurons, used for sequence image segmentation. Despite previous research, little progress has been made in building successful neural systems for image segmentation in digital hardware. This work shows that digital and flexible solutions can efficiently handle large networks of spiking neurons.
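
    A toy software sketch of the spike-driven grouping principle is given below (inspired by, but much simpler than, the paper's LEGION/FPGA design; the function name, the 4-neighbour coupling, and the similarity tolerance are assumptions): a firing pixel excites neighbours of similar intensity, one wave of spikes covers one homogeneous region, and a global reset separates successive regions.

        import numpy as np
        from collections import deque

        def spike_wave_segment(image, sim_tol=0.1, v_th=1.0, coupling=1.0):
            # Each pixel is an integrate-and-fire unit; a spike injects `coupling`
            # into the 4-neighbours whose intensity differs by at most `sim_tol`,
            # so a wave of spikes spreads over one homogeneous region and all units
            # firing in that wave share a label.
            h, w = image.shape
            v = np.zeros((h, w))                    # membrane potentials
            labels = np.zeros((h, w), dtype=int)    # 0 = has not fired yet
            next_label = 0
            for sy in range(h):
                for sx in range(w):
                    if labels[sy, sx]:
                        continue                    # already recruited by an earlier wave
                    next_label += 1                 # a fresh "leader" starts a new wave
                    labels[sy, sx] = next_label
                    wave = deque([(sy, sx)])
                    while wave:
                        y, x = wave.popleft()       # this unit fires now
                        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < h and 0 <= nx < w and not labels[ny, nx]
                                    and abs(float(image[ny, nx]) - float(image[y, x])) <= sim_tol):
                                v[ny, nx] += coupling
                                if v[ny, nx] >= v_th:            # neighbour crosses threshold
                                    labels[ny, nx] = next_label
                                    wave.append((ny, nx))
                    v[:] = 0.0                      # global inhibition resets the field between waves
            return labels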